Treating the values detected as outliers as overdrives of the circuits (cases where they are driven into some kind of resonance), we can replace those outliers with the corresponding quartiles.
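A minimal sketch of this quartile-replacement idea, assuming the usual 1.5-IQR Tukey fences (the function name and fence multiplier are my own choices, not fixed by the text):

```python
import numpy as np
import pandas as pd

def cap_outliers_iqr(series):
    # Values outside the Tukey fences are replaced with the quartile
    # on the side they fall: low outliers get Q1, high outliers get Q3.
    q1, q3 = series.quantile([0.25, 0.75])
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return series.where(series.between(lower, upper),
                        np.where(series < lower, q1, q3))
```

Applied column by column, this keeps every row while pulling the "resonant" readings back to the bulk of the distribution.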

It looks like certain orientations exist within the parameters: on average, the points tend to shift to the right for higher-quality signals and to the left for lower-quality signals. The points, along with their means, appear to be oriented this way. We need to verify this by grouping.

Parameter 2 shows no correlation with any of the other parameters, so it looks like a genuinely independent, non-redundant column and hence crucial. The distribution plot, however, suggests otherwise: most probably this parameter plays no role at all in determining whether the signal has proper strength.

The same story applies to Parameter 4 and Parameter 5; the scatterplots tell the same tale in this case too.

Most of the action seems to be happening with Parameter 1, in connection with Parameter 8 and Parameter 9.
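A sketch of how such a correlation screen can be run with pandas; the column names and the synthetic data here are hypothetical stand-ins for the real parameter columns:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 3)),
                  columns=["Parameter1", "Parameter2", "Parameter8"])
# Make the target depend on Parameter1 only, leaving Parameter2 independent.
df["Quality"] = 2 * df["Parameter1"] + rng.normal(scale=0.1, size=200)

# Correlation of every parameter with the target column.
corr = df.corr()["Quality"].drop("Quality")
```

Columns whose absolute correlation with the target sits near zero (as Parameter2 does here by construction) are the candidates the distribution plots flagged as playing no role.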

There is only a very small difference between using a neural network as a regressor and as a classifier: only the output layer and the softmax activation change the picture. Keeping that in mind, we can first train it as a regressor, then make a small modification to our original data to turn the target into a class column, and then train the classifier neural network.
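A sketch of that point in Keras: the hidden stack is shared, and only the head (linear output versus softmax) and the loss differ. The layer sizes and function name are illustrative assumptions, not the document's actual architecture:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model(task, n_features, n_classes=None):
    # Identical hidden stack for both tasks.
    model = keras.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(64, activation="relu"),
        layers.Dense(32, activation="relu"),
    ])
    if task == "regression":
        model.add(layers.Dense(1))  # linear output for a continuous target
        model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    else:
        model.add(layers.Dense(n_classes, activation="softmax"))
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
    return model
```

Switching tasks is then just a matter of calling `build_model` with a different `task` argument after the target column has been recoded.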

Please Note

The object-oriented approach and class definitions have been avoided completely, and the process is simplified a lot. That approach is deferred to Part 3 of the project, where a class-based design becomes a necessity. The first two parts of the project dealt with here are a starting point, a stepping stone, toward Part 3. Hence, for maximum flexibility from my perspective, the intent is that these models will be used in that file as they are.

Approach Taken

Though a number of trials went into developing these models, not all of them are recorded or continued here. The trials below cover both the categorical and the regression approach to the data.

The first trial is without a validation set. The second trial includes a validation set. The third trial reports the performance metrics after applying some PCA: since some dimensions looked fairly useless, we reduce the dimensionality of the data and repeat the same process. The fourth trial is simply one sample of the iterations I ran, varying things like the number of layers, the batch size, and the number of epochs.

Categorical Neural Network - Trial 1

Regressional Neural Network - Trial 1

Some Experimentation

We will drop some columns that show redundancy and check whether, and how, the model improves. We will also try some PCA on the parameters to clean them up.
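The column-dropping step can be sketched as follows; the frame and the names of the dropped columns are hypothetical, standing in for whichever parameters the scatterplots flagged as redundant:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(10, 5)),
                  columns=[f"Parameter{i}" for i in range(1, 6)])

# Drop the columns the earlier plots suggested carry no signal.
df_trimmed = df.drop(columns=["Parameter4", "Parameter5"])
```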

Categorical Neural Network - Trial 2

Regression Neural Network - Trial 2

It looks like some serious overfitting is going on with the categorical neural network.

Some Principal Component Analysis

Let us do some principal component analysis to eliminate unknown or redundant dimensions in the data, and redo the whole thing again.
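A minimal sketch of that step with scikit-learn, on a synthetic stand-in for the 1599-row, 9-parameter matrix (scaling first, then keeping the components that explain 95% of the variance; the 95% threshold is my assumption):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1599, 9))  # stand-in for the 9 parameter columns

# Standardize so no parameter dominates the components by scale alone.
X_scaled = StandardScaler().fit_transform(X)

# A float n_components keeps the fewest components reaching that
# cumulative explained-variance ratio.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X_scaled)
```

The reduced matrix `X_reduced` then replaces the raw parameters as input to the Trial 3 networks.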

Categorical Neural Network - Trial 3

Regression Neural Network - Trial 3

Evidently, doing some PCA is not helping at all. A number of iterations with possible changes have been tried, but they are not documented here. Some more testing on a sample is shown below.

One glaring issue is that the data is not at all sufficient to train our model: with a batch size of 50 and 100 epochs, we have already cycled many times through a dataset of just 1599 input points. Hence we need more data to push the accuracy above 80%.

Modifying some options and Testing

Here we have decreased the batch size and increased the number of epochs, but still could not reach a peak accuracy greater than 0.45. Let us try the same with the regression model, where the initial accuracy was above 50%.
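The knobs being turned here look like the following in Keras. The architecture and data are synthetic stand-ins, and the epoch count is kept small so the sketch runs quickly; the actual trials used a few hundred epochs:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
X = rng.normal(size=(1599, 9)).astype("float32")
y = rng.integers(0, 6, size=1599)  # stand-in quality classes

model = keras.Sequential([
    layers.Input(shape=(9,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(6, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Smaller batches (more weight updates per epoch) and more epochs
# are the two changes this trial makes.
history = model.fit(X, y, batch_size=16, epochs=3, verbose=0)
```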

Reducing the number of layers and neurons and checking it out

Please Note: We can certainly make a lot of improvements on what is presented here, but that would require a lot of experimentation and other options, including but not restricted to the concept of convolution and much else. I think it is beyond the scope of the given problem statement and the mark weightage. Advanced options may be explored in future projects.

Most of the observations are clear enough not to need a detailed description, though a short statement has been sprinkled in here and there. Most of the explanations are self-contained, either at the beginning of a section or within it. Thanks for the valuable feedback.